Engineering Social Order 1

Cristiano Castelfranchi
Abstract
Social Order is becoming a major problem in MAS and in computer-mediated human interaction. After explaining the notions of Social Order and Social Control, I claim that there are multiple and complementary approaches to Social Order and to its engineering: all of them must be exploited. In computer science one tries to solve this problem by rigid formalisation and rules, constraining infrastructures, security devices, etc. I think that a more socially oriented approach is also needed. My point is that Social Control, and in particular decentralised and autonomous Social Control, will be one of the most effective approaches.

0. The framework: Social Order Vs Social Control

This is an introductory paper: I will not propose any solution to the problem of social order in engineering cybersocieties, neither theoretical nor, even less, practical ones. I want just to contribute to circumscribing and clarifying the problem, identifying relevant issues, and discussing some notions for a possible ontology in this domain. I take a cognitive and social perspective; however, I claim that this is relevant not only for the new-born computational social sciences, but for the networked society and MAS. There is a dialectic relationship: on the one hand, in MAS and cybersocieties we should be inspired by human social phenomena; on the other hand, by computationally modelling social phenomena we should provide a better understanding of them. In particular I try to understand what Social Order 2 is, and to describe different approaches to and strategies for Social Order, with special attention to Social Control and its means.

Since the agents (either human or artificial) are relatively autonomous, act in an open world, on the basis of their subjective and limited points of view and for their own interests or goals, Social Order is a problem. There is no possibility of applying a predetermined, "hardwired" or designed social order. Social order has to be continuously restored and adjusted, dynamically produced by and through the action of the agents themselves; this is why Social Control is necessary.

There are multiple and complementary approaches to Social Order and to its engineering: all of them must be exploited. In computer science one tries to solve this problem by rigid formalisation and rules, constraining infrastructures, security devices, etc. I think that a more socially oriented approach is also needed. My point is that Social Control, and in particular decentralised and autonomous Social Control, will be one of the most effective approaches.

1. The Big Problem: Apocalypse now

I feel that the main trouble of infosocieties, distributed computing, the Agent-based paradigm, etc. will be - quite soon - that of "social order" in the virtual or artificial society, in the net, in MASs. Currently the problem is mainly perceived in terms of "security", and in terms of crises, breakdowns, and traffic, but it is more general. The problem is how to obtain from local design and programming, and from local actions, interests, and views, some desirable and relatively predictable/stable emergent results.

1 This work has been and is being developed within the ALFEBIITE European Project: A Logical Framework For Ethical Behaviour Between Infohabitants In The Information Trading Economy Of The Universal Information Ecosystem. IST-1999-10298.

2 The spreading identification between "social order" and cooperation is troublesome.
I use "social order" here as "desirable", good social order (from the point of view of an observer or designer, or from the point of view of the participants). However, more generally social order should be conceived as any form of systemic phenomenon or structure which is sufficiently stable, or better, either self-organising and self-reproducing through the actions of the agents, or consciously orchestrated by (some of) them. Social order is neither necessarily cooperative nor a "good" social function. Also systematic dis-functions (in Merton's terminology) are forms of social order. See section 3.

This problem is particularly serious in open environments and MASs, or with heterogeneous and self-interested agents, where a simple organisational solution does not work. The problem has several facets: emergent computation and indirect programming [For90] [Cas98a]; reconciling individual and global goals [Hog97] [Kir99]; the trade-off between initiative and control; etc. Let me just sketch some of these perspectives on THE problem.

1.1 Towards Social Computing: Programming (with) 'the Invisible Hand'? 3

Let me consider the problem from a computational and engineering perspective. It has been remarked that we are moving towards a new "social" computational paradigm [Gas91; 98]. I believe that this should be taken in a radical way, where "social" does not mean only organisation, roles, communication and interaction protocols, norms (and other forms of coordination and control); it should be taken also in terms of spontaneous orders and self-organising structures. That is, one should consider the emergent character of computation in Agent-Based Computing.

In a sense, the current paradigm of computing is going beyond strict 'programming', and this is particularly true in the agents paradigm and in large and open MASs. On the one hand, the agents acquire more and more features such as:

• adaptivity: either in the sense that they learn from their own experience and from previous stimuli; or in the sense that there may be some genetic recombination, mutation, and selection; or in the sense that they are reactive and opportunistic, able to adapt their goals and actions to local, unpredictable and evolving environments;

• autonomy and initiative: the agent takes care of the task/objective, executing it when it finds an opportunity, and proactively, without the direct command or the direct control of the user; it is possible to delegate not only a specified action or task but also an objective to bring about in any way, and the agent will find its way on the basis of its own learning and adaptation, its own local knowledge, its own competence and reasoning, problem solving and discretion;

• distribution and decentralisation: a MAS can be open and decentralised. It is neither established nor predictable which agent will be involved, which task it will adopt, and how it will execute or solve it. During the execution the agent may remain open and reactive to incoming inputs and to the dynamics of its internal state (for example, resource shortage, or a change of preferences). Assigned tasks are (in part) specified by the delegated agents: nobody knows the complete plan. Nobody entirely knows who delegated what to whom. In other words, nobody will be able to specify where, when, why, and by whom a given piece of the resulting computation is being run. The actual computation is just emergent. Nobody directly wrote the program that is being executed.

3 This section is from [Cas98a].
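To make this notion of emergent, "indirectly programmed" computation concrete, here is a minimal toy sketch (the tasks, the neighbourhood size, and the local rule are my own illustrative assumptions, not taken from the paper): each agent follows a purely local rule - adopt the task least covered among the neighbours it happens to observe - and a roughly balanced global division of labour emerges that no designer ever computed or planned.

```python
import random

# Illustrative task names; nothing here comes from the paper itself.
TASKS = ["route", "store", "filter"]

class Agent:
    def __init__(self):
        self.task = random.choice(TASKS)

    def step(self, neighbours):
        # Local rule: adopt the task least covered among my neighbours.
        counts = {t: sum(n.task == t for n in neighbours) for t in TASKS}
        self.task = min(counts, key=counts.get)

agents = [Agent() for _ in range(30)]
for _ in range(50):
    for a in agents:
        others = [x for x in agents if x is not a]
        a.step(random.sample(others, 5))  # each agent sees only a random local sample

# The global allocation is emergent: no agent, and no programmer, computed it.
print({t: sum(a.task == t for a in agents) for t in TASKS})
```

The explicit instructions (the local rule) sit at a lower level than the phenomenon of interest (the global allocation), which is exactly the tension Forrest describes below.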
We are closer to Adam Smith's notion of 'the invisible hand' than to the model of a plan or a program as a pre-specified sequence of steps to be passively executed. This is, in my view, the problem of Emergent Computation (EC) as it applies to DAI/MAS. Forrest [For90] presents the problem of Emergent Computation as follows: 'The idea that interactions among simple deterministic elements can produce interesting and complex global behaviours is well accepted in sciences. However, the field of computing is oriented towards building systems that accomplish specific tasks, and emergent properties of complex systems are inherently difficult to predict and control. ... It is not obvious how architectures that have many interactions with often unpredictable and self-organising effects can be used effectively. ... The premise of EC is that interesting and useful computational systems can be constructed by exploiting interactions among agents. ... The important point is that the explicit instructions are at a different (and lower) level than the phenomena of interest. There is a tension between low-level explicit computations and direct programming, and the patterns of their interaction.'

Thus, there is some sort of 'indirect programming': implementing computations indirectly as emergent patterns. Strangely enough, Forrest - following the fashion of that moment of opposing an anti-symbolic paradigm to gofAI - does not mention DAI, AI agents or MAS at all; she just refers to connectionist models, cellular automata, biological and ALife models, and to the social sciences. However, higher-level components - complex AI agents, cognitive agents - give rise to precisely the same phenomenon (like humans!). More than this, I claim that the 'central themes of EC' as identified by Todd [Tod93] are among the most typical DAI/MAS issues. The central themes of EC include in fact [Tod93]:

• self-organisation, with no central authority to control the overall flow of computation;
• collective phenomena emerging from the interactions of locally-communicating autonomous agents;
• global cooperation among agents, to solve a common goal or share a common resource, being balanced against competition between them to create a more efficient overall system;
• learning and adaptation (and autonomous problem solving and negotiation) replacing direct programming for building working systems;
• dynamic system behaviour taking precedence over traditional AI static data structures.

In sum, Agent-based computing, complex AI agents, and MASs are simply meeting the problems of human society: functions and 'the invisible hand'; the problem of a spontaneous emergent order, of beneficial self-organisation, of the impossibility of planning; but also the problem of harmful self-organising behaviours. Let us look at the same problem from other perspectives.

1.2 Modelling emergent and unaware cooperation among intentional agents

Macy [Mac98] is right when he claims that social cooperation does not need agents' understanding, agreement, contracts, rational planning, or collective decisions. There are forms of cooperation that are deliberated and based on some agreement (like a company, a team, an organised strike), and other forms of cooperation that are emergent: non-contractual and even unaware.
Modelling those forms is very important, but my claim [Cas97] [Cas92a] is that it is important to model them not just among sub-cognitive agents 4 (using learning or selection of simple rules) [Ste80] [Mat92], but also among cognitive and planning agents 5 whose behaviour is regulated by anticipatory representations (the "future"). Also these agents cannot understand, predict, and govern all the global and compound effects of their actions at the collective level. Some of these effects are self-reinforcing and self-organising.

4 By "sub-cognitive" agents I mean agents whose behaviour is not regulated by an internal explicit representation of its purpose and by explicit beliefs. Sub-cognitive agents are, for example, simple neural-net agents, or merely reactive agents.

5 Cognitive agents are agents whose actions are internally regulated by goals (goal-directed) and whose goals, decisions, and plans are based on beliefs. Both goals and beliefs are cognitive representations that can be internally generated, manipulated, and subject to inferences and reasoning. Since a cognitive agent may have more than one goal active in the same situation, it must have some form of choice/decision, based on some "reason", i.e. on some belief and evaluation. Notice that we use "goal" as the general family term for all motivational representations: from desires to intentions, from objectives to motives, from needs to ambitions, etc.

I argue that it is not sufficient to put deliberation and intentional action (with intended effects) together with some reactive or rule-based or associative layer/behaviour, let some unintended social function emerge from this layer, and let the feedback of the unintended reinforcing effects operate on this layer [Par82]. The real issue is precisely that the intentional actions of the agents give rise to functional, unaware collective phenomena (e.g., the division of labour), not (only) their unintentional behaviours. How to build unaware functions and cooperation on top of intentional actions and intended effects? How is it possible that positive results - thanks to their advantages - reinforce and reproduce the actions of intentional agents, and self-organise and reproduce themselves, without becoming simple intentions? [Els82]. This is the real theoretical challenge for reconciling emergence with cognition, intentional behaviour with social functions, planning agents with unaware cooperation.

At the SimSoc'97 workshop in Cortona [Cas97] I claimed that only agent-based social simulation joined with AI models of agents can eventually solve this problem, by formally modelling and simulating at the same time the individual minds and behaviours, the emerging collective action, structure or effect, and their feedback that shapes minds and reproduces these phenomena. I suggested that we need more complex forms of reinforcement learning, not just based on classifiers, rules, associations, etc., but operating on the cognitive representations governing the action, i.e. on beliefs and goals. My claim is precisely that "the consequences of the action, which may or may not have been consciously anticipated, modify the probability that the action will be repeated next time the input conditions are met" [Cas97; Cas98c]: functions are just effects of the behaviour of the agents that go beyond the intended effects (are not intended) and succeed in reproducing themselves because they reinforce the beliefs and the goals of the agents that caused that behaviour.
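A minimal sketch of what such "cognitive reinforcement" might look like, under my own illustrative assumptions (the update rule and all numbers are mine, not the paper's model): the feedback from an unintended collective benefit does not act on raw stimulus-response rules, but updates the agent's beliefs about an action; belief-based deliberation then reproduces the behaviour without the function ever becoming an intention.

```python
# Illustrative sketch: reinforcement operating on beliefs, not on S-R rules.

class CognitiveAgent:
    def __init__(self):
        # Belief: "how well does each action serve my goal?" (degree in [0, 1])
        self.belief_effectiveness = {"specialise": 0.5, "generalise": 0.5}

    def deliberate(self):
        # Intentional choice: pick the action believed to be most effective.
        return max(self.belief_effectiveness, key=self.belief_effectiveness.get)

    def feedback(self, action, benefit):
        # The unintended positive effect (e.g., a smoother division of labour)
        # reinforces the supporting belief; the global function itself never
        # becomes an intention of the agent.
        b = self.belief_effectiveness[action]
        self.belief_effectiveness[action] = b + 0.1 * (benefit - b)

agent = CognitiveAgent()
for _ in range(20):
    action = agent.deliberate()
    # Assume "specialise" happens to yield a higher collective benefit:
    agent.feedback(action, 0.9 if action == "specialise" else 0.3)

print(agent.belief_effectiveness)  # the functional action wins, unintended as a function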
1.3 Reconciling Individual with Global Goals

"Typically, (intelligent) agents are assumed to pursue their own, individual goals. On this basis, a diversity of agent architectures (such as BDI), behavioral strategies (benevolent, antagonistic, etc.), and group formation models (joint intentions, coalition formation, and others) have been developed. All these approaches involve a bottom-up perspective. The existence of a collection of agents thus depends on a dynamic network of (in most cases bilateral) individual commitments. Global behavior (macro level) emerges from individual activities/interactions (micro level). ... In business environments, however, the behavior of the global system (i.e., on the macro level) is important as well. Typical requirements concern system stability over time, a minimum level of predictability, an adequate relationship to (human-based) social systems, and a clear commitment to aims, strategies, tasks, and processes of enterprises. Introducing agents into business information systems thus requires to resolve this conflict of bottom-up and top-down oriented perspectives." 6 [Kir99]

6 I would say: between an individualist and a collectivist perspective. Kirn proposes to deal with our problem in an organisational approach. This is quite traditional in MAS and is surely useful. However, I claim in this paper that it is largely insufficient.

This is another point of view on the same problem. It can be formulated as follows: how to reconcile individual rationality with group achievements? Given goal-autonomous agents 7, there are basically two solutions to this problem of making the agent "sensible" to the collective interest:

a) to use external incentives: prizes, punishments, redistribution of incomes, in general rewards, for example money (e.g., to make industries sensible to the environmental problem you can put taxes on pollution), so that the agent will find it convenient, relative to his/her selfish motives and utility, to do something for the group (to favour the group or to do as requested by the group);

b) to endow the agent with pro-social motives and attitudes (sympathy, group identity, altruism, etc.), either based on social emotions or not, either acquired (learning, socialisation) or inborn (by inheritance or design); in this case there is an intrinsic pro-group motivation. The agent is subjectively rational - although not economically rational - but ready to sacrifice 8.

Human societies use both these approaches 9; this is not casual. We should experiment with the advantages and disadvantages of the two, to see in which domain and why one is better than the other (see the sketch after the footnotes below).

7 i.e., self-motivated agents (self-interested but not necessarily selfish) that adopt goals only instrumentally to some goals of their own (be these either selfish or altruistic).

8 This might also be objectively rational, adaptive: it depends on ecological and evolutionary factors.

9 There is a sort of intermediate or double-faced phenomenon which, depending on its use or modelling, can be considered as part either of (a) or of (b): internal gratification or punishment (like guilt). If the agent does something pro-social in order to avoid guilt, this is selfish motivation and behaviour (case a); but notice that guilt feelings presuppose some pro-social goals, like equity or norm respect! If, on the contrary, the agent acts fairly and honestly just for these pro-social motives (not in order to avoid regret), and then feels guilty when violating (guilt being just a learning device for socialisation), we are in case (b). The psychological notion of intrinsic motivation or internal reward mixes up the two. Both perspectives are realistic and even compatible.
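The difference between the two routes can be shown with a toy decision sketch (the payoffs, the tax, and the sympathy weight are arbitrary illustrative values of mine): in (a) an external sanction changes the agent's selfish payoffs; in (b) an intrinsic pro-social weight on the group's outcome changes the agent's subjective utility directly. Both can make the pro-group option win.

```python
# Toy comparison of the two routes to pro-group behaviour; all values illustrative.

def choose(selfish_payoff, tax=0.0, sympathy=0.0, group_payoff=None):
    """Pick the option maximising the agent's subjective utility."""
    utility = {}
    for option, mine in selfish_payoff.items():
        u = mine - (tax if option == "pollute" else 0.0)    # (a) external incentive
        if group_payoff is not None:
            u += sympathy * group_payoff[option]            # (b) pro-social motive
        utility[option] = u
    return max(utility, key=utility.get)

selfish = {"pollute": 10.0, "clean": 6.0}
group   = {"pollute": -8.0, "clean": 5.0}

print(choose(selfish))                                      # 'pollute': raw self-interest
print(choose(selfish, tax=5.0))                             # 'clean': the sanction flips the payoffs
print(choose(selfish, sympathy=0.5, group_payoff=group))    # 'clean': intrinsic motive flips the utility
```

Note the design difference: in (a) the agent remains purely selfish and only its environment is engineered; in (b) the agent's own motivational structure is engineered.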
Experimental social simulation can give a precious contribution to coping with this problem (for example [Gil95; Con97; Kam00]); but also good formal theories of spontaneous and non-spontaneous social order and of its mechanisms, and in particular theories of spontaneous and deliberated forms of "social control", will play a major role. Clearly deontic stuff (norms, conventions, institutions, roles, commitments, obligations, rights, etc.) will have a large part in any "implementation" of social control mechanisms in virtual and artificial societies. However, in my view, this role will be improved by a more open and flexible view of deontic phenomena and by embedding them within the framework of social control and social order. On the one hand, one should realise that social control and order do not have only normative solutions (organisation and norms are not enough); on the other hand - more important - one should account for a flexible view of normative behaviour, and for a gradual approach to norms from more informal and spontaneous phenomena (like spontaneous conventions and social norms, or spontaneous decentralised social control) to more institutionalised and formal forms of deontic regulation [Cas00].

2. Approaches and delusions

There are different "philosophies" about this very complex problem, different approaches and policies. For example:

• A coordination media and infrastructures approach, where, thanks to some infrastructures and media, a desirable coordination among independent agents' actions is obtained [Den99].
• A Social Control (SC) approach, focused on sanctions, incentives, control, reputation, etc.
• An organisational approach, relying on roles, division of labour, pre-established multi-agent plans, negotiation, agreements, etc.
• A shared mind view of groups, teams, and organisations, where coordination is due to shared mental maps of the domain, task, and organisation, or to common knowledge of the individual minds.
• A spontaneous Social Order approach, where nobody can bear in mind, understand, monitor or plan the global resulting effects of the "invisible hand" (von Hayek) [Hay67].

First, those approaches or views are not really conceptually independent of each other: frequently one partially overlaps with another, simply hiding some aspects or notions. For example, coordination media are frequently rules and norms; and the same is true for organisational notions like "role", which are normative notions. All of them heavily exploit communication.

Second, in human groups - as I said - all these approaches are used, and Social Order is the result of both spontaneous dynamics and orchestrated and designed actions and constraints. It can be the result of SC and of other mechanisms like the "invisible hand", social influence and learning, socialisation, etc. But SC itself is ambiguous: it can be deliberated, official and institutional, or spontaneous, informal, and even unaware. I will completely put aside here education and social learning, and pro-social built-in motives (except for normative ones), although they play a very important role in shaping social order.
To be schematic, let us put at one extreme the merely self-organising forms unrelated to SC (e.g., market equilibrium) 10; at the other extreme the deliberated and planned SC; and in between the forms of spontaneous, self-organising SC:

[Figure: the spectrum of Social Order, ranging from self-organising forms ('the invisible hand') through informal social control to formal, orchestrated social control.]

10 Let us ignore here other factors of Social Order, like constraining infrastructures for coordination (see section 4).

In IT all of these approaches will prove to be useful. For the moment the most appealing solutions are:

• on the one hand, what I would like to call the "organisational" solution (pre-established roles, some hierarchies, clear responsibilities, etc.). This is more "designed", engineered, and rather reassuring!
• on the other hand, the "normative" or "deontic" solution, based on the formalisation of permissions, obligations, authorisation, delegation, etc., logically controllable in their coherence; 11
• finally, the strictly "economic" solution, based on free rational agents dealing with utilities and incentives in some form of market.

11 These two solutions can be strongly related to one another, since one can give a normative interpretation and realisation of roles, hierarchies, and organisations.

The problem is much more complex, and - in my view - several other convergent solutions should be explored. However, I will consider here only one facet of the problem. My view is that "normative" SC, but spontaneous and decentralised, has to play a major role both in cybersocieties and in artificial social systems. In IT there are some illusions about possible solutions.

The Illusion of Control: Security Vs Morality

The first, spontaneous approach of engineers and computer scientists to those issues is that of increasing security by certification, protocols, authentication, cryptography, central control, rigid rules, etc. Although some of these measures are surely useful and needed, as I said, I believe that the idea of total control and of a technical prevention of chaos, conflicts and deception in computers is unrealistic and even self-defeating in some cases, such as for building trust. Close to this illusion is the formal-norm illusion that I have already criticised (see section 5, later).

The socio-anthropological illusion: let's embed technology in a social, moral and legal human context

In the area of information systems a perspective is already developing aimed at embedding the information system in a complex socio-cultural environment where there are - on top of the technical and security layer - other layers relative to legal aspects, social interaction, trust, and morality [Har95; Lei96]. For sure this is a correct view. The new technology can properly work for human purposes only if integrated and situated in human morality, culture and law. However, this is not enough. For intelligent normative supports (agents), I believe [Cas00b] that what is needed is some attempt to "incorporate" part of these layers and issues in the technology itself. Especially within the intelligent and autonomous agents paradigm, I believe that it is both possible and necessary to model these typically human and social notions. In order to effectively support human cooperation - which is strongly based on social, moral, and legal notions - computers must be able to model and "understand" at least partially what happens among the users. They should be able to manage - and thus partially "understand" - for example permissions, obligations, power, roles, commitments, trust.
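As a minimal sketch of what such partial "understanding" could amount to (the representation, names, and violation check are my own deliberately simplistic assumptions, not a proposal from the paper): an agent that explicitly represents obligations among users can detect their violation by observation, which is one building block for the decentralised, reputation-based social control advocated here.

```python
# Illustrative sketch: explicit deontic records plus a violation check.

from dataclasses import dataclass

@dataclass
class Obligation:
    bearer: str        # who is obliged
    action: str        # what must be done
    counterpart: str   # towards whom the obligation holds
    deadline: int      # e.g., a tick or timestamp

def detect_violations(obligations, observed_actions, now):
    """An observing agent's check: which obligations were not honoured in time?"""
    done = {(a["agent"], a["action"]) for a in observed_actions}
    return [o for o in obligations
            if now > o.deadline and (o.bearer, o.action) not in done]

obligations = [Obligation("seller_1", "deliver_goods", "buyer_7", deadline=10)]
observed = [{"agent": "seller_1", "action": "send_invoice"}]

for v in detect_violations(obligations, observed, now=12):
    # A decentralised response: any observer can report the violation,
    # e.g., lowering the bearer's reputation, without a central authority.
    print(f"violation: {v.bearer} failed '{v.action}' owed to {v.counterpart}")
```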
Moreover, to cope with the open, unpredictable social interaction and collective activity that will emerge among them, the artificial agents themselves should base this interaction on something like organisation, roles, norms, etc. This is in fact what is happening in the domain of agents and MAS, where these topics are in the focus of theoretical and formal modelling, and of some implementations (just to give some examples, see the DEON workshops, the ModelAge project, the IP-CNR work; Jennings; Moses; Tennenholtz; Singh; Boman; etc.).